
    Analysis and Design of Singular Markovian Jump Systems

    This monograph is an up-to-date presentation of the analysis and design of singular Markovian jump systems (SMJSs) in which the transition rate matrix of the underlying systems is generally uncertain, partially unknown, or designed. The problems addressed include stability, stabilization, H∞ control and filtering, observer design, and adaptive control. Applications of Markov processes are investigated using Lyapunov theory, linear matrix inequalities (LMIs), the S-procedure, and the stochastic Barbalat's Lemma, among other techniques. Features of the book include:
    · study of the stability problem for SMJSs with general transition rate matrices (TRMs);
    · stabilization of SMJSs by TRM design, noise control, proportional-derivative and partially mode-dependent control, in terms of LMIs with and without equation constraints;
    · mode-dependent and mode-independent H∞ control solutions, with development of a type of disordered controller;
    · observer-based controllers for SMJSs in which both the designed observer and controller are either mode-dependent or mode-independent;
    · robust H∞ filtering in terms of uncertain TRM or filter parameters, leading to a method for totally mode-independent filtering;
    · LMI-based conditions for a class of adaptive state feedback controllers with almost-certainly-bounded estimation error and almost-certainly-asymptotically-stable closed-loop system states;
    · applications of Markov processes to singular systems with norm-bounded uncertainties and time-varying delays.
    Analysis and Design of Singular Markovian Jump Systems contains valuable reference material for academic researchers wishing to explore the area. The contents are also suitable for a one-semester graduate course.
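    For readers new to the area, the standard state-space form of a singular Markovian jump system is sketched below. This is general textbook background, not a formulation quoted from the book itself:

```latex
E\,\dot{x}(t) = A(r_t)\,x(t) + B(r_t)\,u(t), \qquad
\Pr\{r_{t+\Delta} = j \mid r_t = i\} =
\begin{cases}
\pi_{ij}\,\Delta + o(\Delta), & i \neq j,\\
1 + \pi_{ii}\,\Delta + o(\Delta), & i = j,
\end{cases}
```

    Here $E$ is a singular matrix ($\operatorname{rank} E < n$), $x(t)$ is the state, $u(t)$ the control input, and $r_t$ a continuous-time Markov chain whose transition rate matrix $\Pi = (\pi_{ij})$ is the object the book treats as uncertain, partially unknown, or designable.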

    JudgeLM: Fine-tuned Large Language Models are Scalable Judges

    Evaluating Large Language Models (LLMs) in open-ended scenarios is challenging because existing benchmarks and metrics cannot measure them comprehensively. To address this problem, we propose fine-tuning LLMs as scalable judges (JudgeLM) to evaluate LLMs efficiently and effectively on open-ended benchmarks. We first propose a comprehensive, large-scale, high-quality dataset containing task seeds, LLM-generated answers, and GPT-4-generated judgments for fine-tuning high-performance judges, as well as a new benchmark for evaluating the judges. We train JudgeLM at different scales of 7B, 13B, and 33B parameters, and conduct a systematic analysis of its capabilities and behaviors. We then analyze the key biases in fine-tuning LLMs as judges, identifying them as position bias, knowledge bias, and format bias. To address these issues, JudgeLM introduces a bag of techniques including swap augmentation, reference support, and reference drop, which clearly enhance the judge's performance. JudgeLM obtains state-of-the-art judge performance on both the existing PandaLM benchmark and our proposed new benchmark. JudgeLM is efficient: JudgeLM-7B needs only 3 minutes to judge 5K samples on 8 A100 GPUs. JudgeLM obtains high agreement with the teacher judge, exceeding 90% and even surpassing human-to-human agreement. JudgeLM also demonstrates extended capabilities in judging single answers, multimodal models, multiple answers, and multi-turn chat. Comment: 30 pages, 23 figures.
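    As a rough illustration of how swap augmentation counters position bias, the sketch below doubles each pairwise training sample by swapping the two candidate answers and flipping the verdict. The prompt template, field names, and functions here are illustrative assumptions, not JudgeLM's actual data pipeline:

```python
# Illustrative sketch of swap augmentation for pairwise judge training.
# The sample schema and prompt template are assumptions for this example.

def make_judge_prompt(question, answer_a, answer_b):
    """Format a pairwise judging prompt (hypothetical template)."""
    return (
        f"Question: {question}\n"
        f"Answer A: {answer_a}\n"
        f"Answer B: {answer_b}\n"
        "Which answer is better?"
    )

def swap_augment(sample):
    """Emit the original sample plus a position-swapped copy.

    Swapping the two candidate answers (and flipping the verdict to match)
    teaches the judge that answer quality, not slot position, decides the winner.
    """
    original = {
        "prompt": make_judge_prompt(
            sample["question"], sample["answer_a"], sample["answer_b"]
        ),
        "verdict": sample["verdict"],  # "A" or "B"
    }
    swapped = {
        "prompt": make_judge_prompt(
            sample["question"], sample["answer_b"], sample["answer_a"]
        ),
        "verdict": "B" if sample["verdict"] == "A" else "A",
    }
    return [original, swapped]

sample = {
    "question": "What is 2 + 2?",
    "answer_a": "4",
    "answer_b": "5",
    "verdict": "A",
}
pairs = swap_augment(sample)
print(pairs[1]["verdict"])  # the swapped copy flips the label to "B"
```

    At training time, both copies would be fed to the judge, so a model that merely prefers the first slot is penalized on the swapped copy.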